fix: preserve context by retrying stateless on invalid previous_response_id (Responses API) #7709
Conversation
Pull Request Overview
Fixes a regression in the OpenAI Responses API where conversation context was lost when retrying after an invalid previous_response_id error. The bug occurred because retries removed the ID but didn't re-prepare the input to include the full conversation, sending only the latest message instead.
- Enhanced error detection to catch all previous_response_id related 400 errors
- Modified retry logic to re-prepare the full conversation when falling back from stateful to stateless mode
- Added comprehensive test coverage to verify the fix works on both SDK and SSE code paths
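The broadened error detection described in the first bullet can be sketched roughly as follows. This is an illustrative assumption, not the actual Roo Code implementation: the function name `isPreviousResponseIdError` and the error shape are hypothetical.

```typescript
// Illustrative sketch only: treat *any* 400 whose message mentions
// previous_response_id as a continuity failure, rather than matching
// a single error string such as "Previous response not found".
interface ApiErrorLike {
	status?: number
	message?: string
}

function isPreviousResponseIdError(error: ApiErrorLike): boolean {
	return error.status === 400 && /previous_response_id/i.test(error.message ?? "")
}
```

Matching on the parameter name rather than a specific message keeps the retry path working even if the API's wording changes.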
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/api/providers/openai-native.ts | Enhanced retry logic to preserve context by re-preparing full conversation on previous_response_id failures |
| src/api/providers/tests/openai-native.spec.ts | Added regression test validating proper context preservation during retry scenarios |
```typescript
if (systemPrompt && messages) {
	// Re-prepare input without previous_response_id (will send full conversation)
	const { formattedInput } = this.prepareStructuredInput(systemPrompt, messages, undefined)
	retryRequestBody.input = formattedInput
```
Copilot AI · Sep 5, 2025
Consider adding an else clause to handle the case where systemPrompt or messages are undefined. Without proper fallback handling, the retry might still send incomplete data if these parameters are missing.
Suggested change:

```typescript
if (systemPrompt && messages) {
	// Re-prepare input without previous_response_id (will send full conversation)
	const { formattedInput } = this.prepareStructuredInput(systemPrompt, messages, undefined)
	retryRequestBody.input = formattedInput
} else {
	// If we don't have the necessary data, fall back to SSE immediately
	yield* this.makeGpt5ResponsesAPIRequest(
		retryRequestBody,
		model,
		metadata,
		systemPrompt,
		messages,
	)
	return
```
```typescript
if (systemPrompt && messages) {
	// Re-prepare input without previous_response_id (will send full conversation)
	// Note: We pass undefined metadata to prepareStructuredInput to ensure it doesn't use previousResponseId
	const { formattedInput } = this.prepareStructuredInput(systemPrompt, messages, undefined)

	// Update the input in the retry request body to include full conversation
	retryRequestBody.input = formattedInput
}
```
Copilot AI · Sep 5, 2025
This logic is duplicated from lines 315-319. Consider extracting this retry preparation logic into a separate method to avoid code duplication and improve maintainability.
Thank you for your contribution! I've reviewed the changes and found that this fix effectively addresses the regression where conversation context was lost after continuity failures. The solution is well-implemented with good test coverage. I have some suggestions for improvement below.
```typescript
// Clear the stored lastResponseId to prevent using it again
this.lastResponseId = undefined
// Re-prepare the input to send full conversation if we have the necessary data
if (systemPrompt && messages) {
```
I notice there's duplicate retry logic here and in makeGpt5ResponsesAPIRequest() (lines 487-531). Could we extract this into a shared helper method to reduce duplication and ensure consistency?
Something like:
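A minimal sketch of such a shared helper, under the assumption of a free function with simplified types; the names `prepareStatelessRetry`, `RetryRequestBody`, and the `prepare` callback shape are hypothetical, not the handler's real API:

```typescript
// Hypothetical extraction of the retry preparation duplicated across the two
// call sites: drop the invalid continuity id and rebuild the full-conversation
// input when the data needed to do so is available.
interface RetryRequestBody {
	previous_response_id?: string
	input?: unknown
}

type PrepareInput = (systemPrompt: string, messages: unknown[]) => { formattedInput: unknown }

function prepareStatelessRetry(
	body: RetryRequestBody,
	prepare: PrepareInput,
	systemPrompt?: string,
	messages?: unknown[],
): RetryRequestBody {
	// Remove the rejected id so the retry is fully stateless.
	delete body.previous_response_id
	if (systemPrompt && messages) {
		// Re-send the entire conversation instead of only the latest message.
		body.input = prepare(systemPrompt, messages).formattedInput
	}
	return body
}
```

Both `executeRequest()` and `makeGpt5ResponsesAPIRequest()` could then call this one helper, keeping the two retry paths consistent.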
```typescript
if (is400Error && requestBody.previous_response_id && isPreviousResponseError) {
	// Log the error and retry without the previous_response_id
	// Clear the stored lastResponseId to prevent using it again
	this.lastResponseId = undefined
```
Consider adding debug logging here when a retry occurs due to invalid previous_response_id. This would be helpful for monitoring and debugging in production environments.
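One low-cost way to act on this suggestion is a small message-formatting helper shared by both code paths, so production logs stay grep-able. The helper name and message format below are assumptions, not existing code:

```typescript
// Hypothetical helper producing a single, consistent retry diagnostic for both
// the SDK and SSE paths when falling back to stateless mode.
function formatStatelessRetryLog(previousResponseId: string): string {
	return (
		`[openai-native] previous_response_id "${previousResponseId}" was rejected with HTTP 400; ` +
		`retrying in stateless mode with the full conversation`
	)
}
```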
```typescript
	role: "user",
	content: [{ type: "input_text", text: "Latest message" }],
},
])
```
Great test coverage for the main scenario! Consider adding edge case tests for:
- What happens if the retry also fails with a different error?
- Behavior when systemPrompt or messages are undefined during retry
This would ensure the fix is robust in all scenarios.
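The second edge case can be modeled with a small pure function; this is a simplified stand-in for the handler's behavior, not its real signature:

```typescript
// Simplified model of the retry-input decision: when systemPrompt or messages
// are unavailable, keep the existing input rather than sending an empty or
// truncated request.
function chooseRetryInput(
	currentInput: string[],
	systemPrompt: string | undefined,
	messages: string[] | undefined,
): string[] {
	if (systemPrompt && messages) {
		return [systemPrompt, ...messages] // full conversation can be rebuilt
	}
	return currentInput // fallback: degrade gracefully, never drop the input
}
```

A regression test asserting both branches would cover the undefined-parameters scenario the review raises.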
Closing this PR to implement a cleaner solution that sends the full conversation on retry when previous_response_id fails.
Related GitHub Issue
No linked issue; internal bug fix. Please link the appropriate tracking issue.
Roo Code Task Context (Optional)
No Roo Code task context for this PR
Description
This PR fixes a regression in the OpenAI Responses API provider that caused conversation context to be lost after a continuity failure.
What failed:
What this fixes:
Why it matters:
Key changes:
Test Procedure
Automated:
Manual (optional):
Pre-Submission Checklist
Screenshots / Videos
No UI changes in this PR
Documentation Updates
Additional Notes
Get in Touch
@your-discord-handle
Important
Fixes context loss in the OpenAI Responses API by retrying with the full conversation on an invalid `previous_response_id`.

- Fixes context loss in `OpenAiNativeHandler` by retrying in stateless mode when `previous_response_id` is invalid.
- Clears `lastResponseId` and re-prepares the input with the full conversation on 400 errors related to `previous_response_id`.
- Updates `executeRequest()` and `makeGpt5ResponsesAPIRequest()` in `openai-native.ts`.
- Adds tests in `openai-native.spec.ts` to validate the retry logic after an invalid `previous_response_id`.

This description was created by
for 8093615. You can customize this summary. It will automatically update as commits are pushed.